Scaling Data-Constrained Language Models
The current trend of scaling language models involves increasing both
parameter count and training dataset size. Extrapolating this trend suggests
that training dataset size may soon be limited by the amount of text data
available on the internet. Motivated by this limit, we investigate scaling
language models in data-constrained regimes. Specifically, we run a large set
of experiments varying the extent of data repetition and compute budget,
ranging up to 900 billion training tokens and 9 billion parameter models. We
find that with constrained data for a fixed compute budget, training with up to
4 epochs of repeated data yields negligible changes to loss compared to having
unique data. However, with more repetition, the value of adding compute
eventually decays to zero. We propose and empirically validate a scaling law
for compute optimality that accounts for the decreasing value of repeated
tokens and excess parameters. Finally, we experiment with approaches mitigating
data scarcity, including augmenting the training dataset with code data or
removing commonly used filters. Models and datasets from our 400 training runs
are freely available at https://github.com/huggingface/datablations.
Comment: 47 pages (9 main), 37 figures, 13 tables
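The decaying value of repeated tokens can be illustrated with a small numerical sketch. The snippet below assumes an exponential-saturation form for the "unique-equivalent" data contributed by repeated epochs; the functional form and the decay constant r_star are illustrative placeholders, not the fitted scaling law from the paper.

```python
import math

def effective_data(unique_tokens: float, epochs: float, r_star: float = 15.0) -> float:
    """Unique-token equivalent of training on `unique_tokens` for `epochs` passes.

    Assumes repeated epochs contribute exponentially less new information,
    saturating at unique_tokens * (1 + r_star). This mirrors the qualitative
    behaviour in the abstract (a few epochs are roughly as good as unique data,
    many epochs add nothing), but the constants here are assumptions.
    """
    repeated_epochs = max(epochs - 1.0, 0.0)
    return unique_tokens * (1.0 + r_star * (1.0 - math.exp(-repeated_epochs / r_star)))

unique = 100e9  # 100B unique tokens available
for epochs in (1, 4, 16, 64):
    d_eff = effective_data(unique, epochs)
    print(f"{epochs:>3} epochs: {epochs * unique / 1e9:7.0f}B tokens seen, "
          f"~{d_eff / 1e9:6.0f}B unique-equivalent")
```

Under this form, 4 epochs are worth nearly as much as 4x unique data, while at 64 epochs most of the additional compute buys almost no effective data, matching the diminishing returns described above.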
FinGPT: Large Generative Models for a Small Language
Large language models (LLMs) excel in many tasks in NLP and beyond, but most
open models have very limited coverage of smaller languages and LLM work tends
to focus on languages where nearly unlimited data is available for pretraining.
In this work, we study the challenges of creating LLMs for Finnish, a language
spoken by less than 0.1% of the world population. We compile an extensive
dataset of Finnish combining web crawls, news, social media and eBooks. We
pursue two approaches to pretrain models: 1) we train seven monolingual models
from scratch (186M to 13B parameters) dubbed FinGPT, 2) we continue the
pretraining of the multilingual BLOOM model on a mix of its original training
data and Finnish, resulting in a 176 billion parameter model we call BLUUMI.
For model evaluation, we introduce FIN-bench, a version of BIG-bench with
Finnish tasks. We also assess other model qualities such as toxicity and bias.
Our models and tools are openly available at https://turkunlp.org/gpt3-finnish.
Comment: 17 pages (10 main), 7 figures, 5 tables
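The BLUUMI recipe of continuing pretraining on a mix of the original data and Finnish can be sketched at the data level. The snippet below interleaves two streaming corpora with a fixed mixing ratio using the Hugging Face datasets library; the corpora chosen here (OSCAR English and Finnish splits) and the 50/50 ratio are placeholder assumptions, not the mixture used in the paper.

```python
from datasets import load_dataset, interleave_datasets

# Placeholder corpora standing in for BLOOM's original training data and the Finnish corpus.
original = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
finnish = load_dataset("oscar", "unshuffled_deduplicated_fi", split="train", streaming=True)

# Interleave the two streams; the 50/50 ratio is an illustrative choice.
mixed = interleave_datasets([original, finnish], probabilities=[0.5, 0.5], seed=42)

# Peek at the first few mixed examples.
for i, example in enumerate(mixed):
    print(example["text"][:80])
    if i == 4:
        break
```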
Masader Plus: A New Interface for Exploring +500 Arabic NLP Datasets
Masader (Alyafeai et al., 2021) created a metadata structure to be used for
cataloguing Arabic NLP datasets. However, developing an easy way to explore
such a catalogue is a challenging task. To provide the best possible experience
for users and researchers exploring the catalogue, several design and
user-experience challenges must be resolved. Furthermore, user interactions with
the website can offer a straightforward way to improve the catalogue. In this
paper, we introduce Masader Plus, a web interface for browsing Masader. We
demonstrate data exploration, filtering, and a simple API that allows users to
examine datasets from the backend. Masader Plus can be explored at
https://arbml.github.io/masader, and a video recording explaining the interface
is available at https://www.youtube.com/watch?v=SEtdlSeqchk.
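As an illustration of how such a catalogue API might be consumed, the sketch below fetches dataset metadata as JSON and filters it client-side. The endpoint URL and the field names (Name, Dialect) are hypothetical placeholders, not the documented Masader Plus API.

```python
import requests

# Hypothetical endpoint and schema; the real Masader Plus API may differ.
API_URL = "https://example.org/masader/api/datasets"

response = requests.get(API_URL, timeout=10)
response.raise_for_status()
datasets = response.json()

# Client-side filtering on an assumed "Dialect" metadata field.
egyptian = [d for d in datasets if d.get("Dialect") == "Egyptian"]
for d in egyptian[:5]:
    print(d.get("Name"))
```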
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
Large language models (LLMs) have been shown to be able to perform new tasks
based on a few demonstrations or natural language instructions. While these
capabilities have led to widespread adoption, most LLMs are developed by
resource-rich organizations and are frequently kept from the public. As a step
towards democratizing this powerful technology, we present BLOOM, a
176B-parameter open-access language model designed and built thanks to a
collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer
language model that was trained on the ROOTS corpus, a dataset comprising
hundreds of sources in 46 natural and 13 programming languages (59 in total).
We find that BLOOM achieves competitive performance on a wide variety of
benchmarks, with stronger results after undergoing multitask prompted
finetuning. To facilitate future research and applications using LLMs, we
publicly release our models and code under the Responsible AI License.
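Because the weights are openly released, the model can be loaded directly with the Hugging Face transformers library. The sketch below uses the smaller bigscience/bloom-560m checkpoint purely to keep the example runnable on modest hardware; the full model is published as bigscience/bloom.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The 176B model is "bigscience/bloom"; the 560M variant keeps this sketch lightweight.
model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Greedy decoding of a short continuation.
prompt = "The capital of Finland is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```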